
    Cores of Cooperative Games in Information Theory

    Cores of cooperative games are ubiquitous in information theory, and arise most frequently in the characterization of fundamental limits in various scenarios involving multiple users. Examples include classical settings in network information theory such as Slepian-Wolf source coding and multiple access channels, classical settings in statistics such as robust hypothesis testing, and new settings at the intersection of networking and statistics such as distributed estimation problems for sensor networks. Cooperative game theory allows one to understand aspects of all of these problems from a fresh and unifying perspective that treats users as players in a game, sometimes leading to new insights. At the heart of these analyses are fundamental dualities that have long been studied in the context of cooperative games; for information-theoretic purposes, these are dualities between information inequalities on the one hand and properties of rate, capacity or other resource-allocation regions on the other. Comment: 12 pages, published at http://www.hindawi.com/GetArticle.aspx?doi=10.1155/2008/318704 in EURASIP Journal on Wireless Communications and Networking, Special Issue on "Theory and Applications in Multiuser/Multiterminal Communications", April 2008
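
    One way to make the game-theoretic reading of the Slepian-Wolf example concrete: treat each source as a player, and call a rate vector admissible when every coalition's sum rate covers the conditional entropy of its sources given the rest. The sketch below checks this for two sources; the joint pmf `p`, the helper `entropy`, and the test rate pairs are illustrative assumptions, not material from the paper.

```python
import numpy as np

# Minimal sketch (not from the paper): the two-source Slepian-Wolf region
# written as coalition-style constraints
#   R1 >= H(X1|X2),  R2 >= H(X2|X1),  R1 + R2 >= H(X1, X2).
# The joint pmf p[x1, x2] below is a made-up example.

def entropy(p):
    """Shannon entropy in bits of a probability array, ignoring zeros."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

p = np.array([[0.4, 0.1],
              [0.1, 0.4]])          # joint pmf of (X1, X2)

H12 = entropy(p.ravel())            # H(X1, X2)
H1  = entropy(p.sum(axis=1))        # H(X1)
H2  = entropy(p.sum(axis=0))        # H(X2)

def in_slepian_wolf_region(r1, r2):
    return (r1 >= H12 - H2 and      # R1 >= H(X1|X2)
            r2 >= H12 - H1 and      # R2 >= H(X2|X1)
            r1 + r2 >= H12)         # sum-rate constraint

print(in_slepian_wolf_region(0.9, 0.9))   # True for this pmf
print(in_slepian_wolf_region(0.3, 0.3))   # False: individual rates too small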

    Concentration of the information in data with log-concave distributions

    A concentration property of the functional $-\log f(X)$ is demonstrated, when a random vector $X$ has a log-concave density $f$ on $\mathbb{R}^n$. This concentration property implies in particular an extension of the Shannon-McMillan-Breiman strong ergodic theorem to the class of discrete-time stochastic processes with log-concave marginals. Comment: Published at http://dx.doi.org/10.1214/10-AOP592 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org)
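
    A toy illustration of what such a concentration statement says (not the paper's argument): for the standard Gaussian on $\mathbb{R}^n$, which is log-concave, $-\log f(X)$ has a closed form, its mean is the differential entropy $h(X)$, and its relative fluctuation around $h(X)$ shrinks as the dimension grows. The sample size and dimensions below are arbitrary choices.

```python
import numpy as np

# Illustration only: for X standard Gaussian on R^n (a log-concave density),
#   -log f(X) = (n/2) log(2*pi) + ||X||^2 / 2,
# whose mean is h(X) = (n/2) log(2*pi*e). The average relative deviation
# from h(X) decreases as n grows, i.e. the information content concentrates.

rng = np.random.default_rng(0)

for n in (10, 100, 1000):
    x = rng.standard_normal((5000, n))                 # samples of X in R^n
    info = 0.5 * n * np.log(2 * np.pi) + 0.5 * (x**2).sum(axis=1)
    h = 0.5 * n * np.log(2 * np.pi * np.e)             # differential entropy
    rel_dev = np.abs(info - h) / h
    print(n, rel_dev.mean())                           # shrinks roughly like 1/sqrt(n)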

    Sumset and Inverse Sumset Inequalities for Differential Entropy and Mutual Information

    The sumset and inverse sumset theories of Freiman, Plünnecke and Ruzsa give bounds connecting the cardinality of the sumset $A+B=\{a+b \;;\; a\in A,\, b\in B\}$ of two discrete sets $A,B$ to the cardinalities (or the finer structure) of the original sets $A,B$. For example, the sum-difference bound of Ruzsa states that $|A+B|\,|A|\,|B|\leq|A-B|^3$, where the difference set $A-B=\{a-b \;;\; a\in A,\, b\in B\}$. Interpreting the differential entropy $h(X)$ of a continuous random variable $X$ as (the logarithm of) the size of the effective support of $X$, the main contribution of this paper is a series of natural information-theoretic analogs for these results. For example, the Ruzsa sum-difference bound becomes the new inequality $h(X+Y)+h(X)+h(Y)\leq 3h(X-Y)$ for any pair of independent continuous random variables $X$ and $Y$. Our results include differential-entropy versions of Ruzsa's triangle inequality, the Plünnecke-Ruzsa inequality, and the Balog-Szemerédi-Gowers lemma. We also give a differential-entropy version of the Freiman-Green-Ruzsa inverse-sumset theorem, which can be seen as a quantitative converse to the entropy power inequality. Versions of most of these results for the discrete entropy $H(X)$ were recently proved by Tao, relying heavily on a strong, functional form of the submodularity property of $H(X)$. Since differential entropy is {\em not} functionally submodular, in the continuous case many of the corresponding discrete proofs fail, in many cases requiring substantially new proof strategies. We find that the basic property that naturally replaces the discrete functional submodularity is the data processing property of mutual information. Comment: 23 pages
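
    As a quick sanity check on the displayed sum-difference inequality (illustrative only; the variances are arbitrary), the independent Gaussian case makes every term explicit: $X+Y$ and $X-Y$ have the same variance $\sigma_X^2+\sigma_Y^2$, so $h(X+Y)=h(X-Y)$ and the bound reduces to $h(X)+h(Y)\leq 2h(X-Y)$, which holds because Gaussian differential entropy increases with variance.

```python
import numpy as np

# Numerical spot-check (illustrative, not from the paper) of the
# differential-entropy Ruzsa sum-difference bound
#   h(X+Y) + h(X) + h(Y) <= 3 h(X-Y)
# for independent zero-mean Gaussians, where every term is closed-form:
# h(N(0, s^2)) = 0.5 * log(2*pi*e*s^2), and X+Y, X-Y ~ N(0, sx^2 + sy^2).

def h_gauss(var):
    return 0.5 * np.log(2 * np.pi * np.e * var)

for sx2, sy2 in [(1.0, 1.0), (1.0, 4.0), (0.1, 10.0)]:
    lhs = h_gauss(sx2 + sy2) + h_gauss(sx2) + h_gauss(sy2)
    rhs = 3 * h_gauss(sx2 + sy2)
    print(sx2, sy2, lhs <= rhs + 1e-12)   # True in every case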

    Conditional Rényi entropy and the relationships between Rényi capacities

    The analogues of Arimoto's definition of conditional Rényi entropy and Rényi mutual information are explored for abstract alphabets. These quantities, although dependent on the reference measure, have some useful properties similar to those known in the discrete setting. In addition to laying out some such basic properties and the relations to Rényi divergences, the relationships between the families of mutual informations defined by Sibson, Augustin-Csiszár, and Lapidoth-Pfister, as well as the corresponding capacities, are explored. Comment: 17 pages, 1 figure
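
    For readers who want to experiment with these quantities in the simplest (finite-alphabet) setting, here is a small sketch using the standard formulas for the Rényi divergence and Sibson's order-α mutual information; the input pmf `p`, the channel matrix `W`, and the chosen orders are illustrative assumptions, not material from the paper.

```python
import numpy as np

# Finite-alphabet sketch (standard textbook formulas, not code from the paper):
# Renyi divergence   D_a(P||Q) = 1/(a-1) * log sum_x P(x)^a Q(x)^(1-a)
# Sibson mutual information of order a, for input pmf p and channel W[x, y]:
#   I_a(X;Y) = a/(a-1) * log sum_y ( sum_x p(x) W(y|x)^a )^(1/a)

def renyi_divergence(p, q, alpha):
    return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

def sibson_mi(p, W, alpha):
    inner = np.sum(p[:, None] * W**alpha, axis=0) ** (1 / alpha)  # sum over x
    return alpha / (alpha - 1) * np.log(np.sum(inner))             # sum over y

p = np.array([0.5, 0.5])                      # input distribution
W = np.array([[0.9, 0.1],                     # W[x, y]: binary symmetric channel
              [0.1, 0.9]])

for alpha in (0.5, 2.0, 10.0):
    print(alpha, sibson_mi(p, W, alpha))      # non-decreasing in alpha

q_out = p @ W                                 # output distribution P_Y
print(renyi_divergence(W[0], q_out, 2.0))     # D_2( W(.|x=0) || P_Y )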